Noise Resistible Network for Unsupervised Domain Adaptation on Person Re-Identification

Authors

Abstract

Unsupervised domain adaptation on person re-identification (re-ID), which adapts a model trained on a source dataset to a target dataset, has drawn increasing attention over the past few years. It is more practical than traditional supervised methods when applied in real-world scenarios, since those require a huge number of manual annotations for a specific domain, which is unrealistic and even raises personal privacy concerns. Currently, the pseudo-label-based method is one of the most promising solutions in this area. However, in such methods, label noise is ignored and remains a challenge hindering further performance improvements. To solve this problem, this paper proposes a novel unsupervised re-ID framework named Noise Resistible Network (NRNet), which mainly consists of two dual-stream networks. For one thing, during pseudo-label generation, NRNet utilizes one network, denoted as the clustering network, to generate discriminative features of unseen images for clustering, thereby reducing label noise. For another, to avoid the closed-loop noise amplification problem of conventional methods, the other network, a temporally averaged network, is constructed outside the loop to learn how to identify images of the same person. In addition, the two networks are designed with a guiding mechanism, which allows the shallow network to learn representative feature embeddings from the deep network. Extensive experimental results on two widely-used benchmark datasets, i.e., Market-1501 and DukeMTMC-reID, demonstrate that our proposed method outperforms state-of-the-art methods.
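The two ideas in the abstract — grouping target-domain features into pseudo labels, and maintaining a temporally averaged network outside the clustering loop — can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the grouping function is a toy stand-in for the density-based clustering (e.g., DBSCAN) typically used in pseudo-label re-ID pipelines, `ema_update` shows the exponential-moving-average weight update commonly used for temporally averaged networks, and the function names and threshold `eps` are illustrative assumptions.

```python
import numpy as np

def pseudo_labels(features, eps=0.5):
    """Toy grouping: samples within Euclidean distance eps of a cluster
    seed share a pseudo label (a stand-in for real density-based
    clustering such as DBSCAN)."""
    n = len(features)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet assigned"
    cur = 0
    for i in range(n):
        if labels[i] == -1:
            labels[i] = cur
            for j in range(i + 1, n):
                if labels[j] == -1 and np.linalg.norm(features[i] - features[j]) < eps:
                    labels[j] = cur
            cur += 1
    return labels

def ema_update(avg_weights, net_weights, alpha=0.999):
    """Temporally averaged network: each weight is an exponential moving
    average of the training network's weights, so it changes slowly and
    is less exposed to per-iteration label noise."""
    return {k: alpha * avg_weights[k] + (1 - alpha) * net_weights[k]
            for k in avg_weights}
```

In this sketch, two nearby feature vectors receive the same pseudo label while a distant one starts a new cluster, and the averaged weights move only a fraction `1 - alpha` toward the current network each step.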



Similar Articles

Temporal Model Adaptation for Person Re-identification

Person re-identification is an open and challenging problem in computer vision. The majority of efforts have been spent either on designing the best feature representation or on learning the optimal matching metric. Most approaches have neglected the problem of adapting the selected features or the learned model over time. To address such a problem, we propose a temporal model adaptation scheme with ...


Camera Style Adaptation for Person Re-identification

Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camer...


Person re-identification by unsupervised video matching

Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignoring the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in r...


Harmonious Attention Network for Person Re-Identification

Existing person re-identification (re-id) methods either assume the availability of well-aligned person bounding box images as model input or rely on constrained attention selection mechanisms to calibrate misaligned images. They are therefore sub-optimal for re-id matching in arbitrarily aligned person images potentially with large human pose variations and unconstrained auto-detection errors....


Hierarchical Cross Network for Person Re-identification

Person re-identification (person re-ID) aims at matching target person(s) grabbed from different and non-overlapping camera views. It plays an important role for public safety and has application in various tasks such as, human retrieval, human tracking, and activity analysis. In this paper, we propose a new network architecture called Hierarchical Cross Network (HCN) to perform person re-ID. I...



Journal

Journal title: IEEE Access

Year: 2021

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2021.3071134